Logical Concurrency Control from Sequential Proofs
We are interested in identifying and enforcing the isolation requirements of
a concurrent program, i.e., concurrency control that ensures that the program
meets its specification. The thesis of this paper is that this can be done
systematically starting from a sequential proof, i.e., a proof of correctness
of the program in the absence of concurrent interleavings. We illustrate our
thesis by presenting a solution to the problem of making a sequential library
thread-safe for concurrent clients. We consider a sequential library annotated
with assertions along with a proof that these assertions hold in a sequential
execution. We show how we can use the proof to derive concurrency control that
ensures that any execution of the library methods, when invoked by concurrent
clients, satisfies the same assertions. We also present an extension to
guarantee that the library methods are linearizable or atomic.
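As a minimal sketch of the idea (a toy counter, not the paper's library or its proof-to-concurrency-control derivation, which are assumptions here for illustration): the assertion in the sequential code holds because nothing interferes between the read and the write, so a lock protecting exactly that region preserves the sequential proof under concurrent clients.

```python
import threading

class SequentialCounter:
    """Toy 'sequential library': increment() carries an assertion that a
    sequential proof of correctness would establish."""
    def __init__(self):
        self.value = 0

    def increment(self):
        old = self.value
        self.value = old + 1
        # Holds in any sequential (uninterleaved) execution: nothing can
        # modify self.value between the read of 'old' and the write above.
        assert self.value == old + 1

class ConcurrentCounter(SequentialCounter):
    """Concurrency control suggested by the proof: the assertion depends on
    'value' being unchanged between read and write, so that region is made
    atomic with a lock. (Hand-derived sketch, not the paper's algorithm.)"""
    def __init__(self):
        super().__init__()
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            super().increment()

# Concurrent clients: the sequential assertion now holds in every interleaving.
counter = ConcurrentCounter()
workers = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

The point of the sketch is that the lock placement is read off the proof (which facts must stay stable), rather than guessed from the code.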
Forward Invariant Cuts to Simplify Proofs of Safety
The use of deductive techniques, such as theorem provers, has several
advantages in safety verification of hybrid systems; however,
state-of-the-art theorem provers require extensive manual intervention.
Furthermore, there is often a gap between the type of assistance that a theorem
prover requires to make progress on a proof task and the assistance that a
system designer is able to provide. This paper presents an extension to
KeYmaera, a deductive verification tool for differential dynamic logic; the new
technique allows local reasoning using system designer intuition about
performance within particular modes as part of a proof task. Our approach allows
the theorem prover to leverage forward invariants, discovered using numerical
techniques, as part of a proof of safety. We introduce a new inference rule
into the proof calculus of KeYmaera, the forward invariant cut rule, and we
present a methodology to discover useful forward invariants, which are then
used with the new cut rule to complete verification tasks. We demonstrate how
our new approach can be used to complete verification tasks that lie out of the
reach of existing deductive approaches using several examples, including one
involving an automotive powertrain control system.
Comment: Extended version of EMSOFT paper
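The numerical-discovery half of this workflow can be sketched as follows (the dynamics and candidate set below are assumptions chosen for illustration, not the paper's powertrain example): simulate from points of a candidate set and check that trajectories never leave it, which is evidence the set is a forward invariant suitable for a cut.

```python
import math

def f(x):
    # Toy continuous dynamics (assumed for illustration): x' = -x + 0.1*sin(x)
    return -x + 0.1 * math.sin(x)

def stays_invariant(x0, bound=1.0, dt=0.01, steps=2000):
    """Numerically check that the trajectory from x0 never leaves the
    candidate forward-invariant set {x : |x| <= bound} (Euler integration)."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
        if abs(x) > bound:
            return False
    return True

# Sample the candidate set, including its boundary, where invariance can fail.
candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
print(all(stays_invariant(x0) for x0 in candidates))  # evidence of invariance
```

A set that passes such numerical checks is only a *candidate*; in the paper's setting it would then be discharged soundly inside the theorem prover via the forward invariant cut rule.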
Convex Optimization-based Policy Adaptation to Compensate for Distributional Shifts
Many real-world systems often involve physical components or operating
environments with highly nonlinear and uncertain dynamics. A number of
different control algorithms can be used to design optimal controllers for such
systems, assuming a reasonably high-fidelity model of the actual system.
However, the assumptions made on the stochastic dynamics of the model when
designing the optimal controller may no longer be valid when the system is
deployed in the real world. The problem addressed by this paper is the
following: given an optimal trajectory obtained by solving a control problem in
the training environment, how do we ensure that the real-world system
trajectory tracks this optimal trajectory with minimal error in the deployment
environment? In other words, we want to learn how to adapt an optimal trained
policy to distribution shifts in the environment. Distribution
shifts are problematic in safety-critical systems, where a trained policy may
lead to unsafe outcomes during deployment. We show that this problem can be
cast as a nonlinear optimization problem that could be solved using heuristic
methods such as particle swarm optimization (PSO). However, if we instead
consider a convex relaxation of this problem, we can learn policies that track
the optimal trajectory with much better error performance and faster
computation times. We demonstrate the efficacy of our approach on tracking an
optimal path using a Dubins car model, and collision avoidance using both a
linear and a nonlinear model for adaptive cruise control.
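A scalar toy version of the tracking-under-shift problem (the dynamics, gains, and policy below are assumptions for illustration, not the paper's Dubins-car or cruise-control models) shows why convexity helps: for linear dynamics the one-step tracking objective is a quadratic in the input, so the adapted input has a closed-form minimizer instead of requiring a heuristic search like PSO.

```python
# Nominal (training) model: x+ = a_nom*x + b*u; deployed model has a shifted
# coefficient a_dep (the 'distributional shift' in this toy setting).
a_nom, a_dep, b = 0.9, 0.8, 1.0

# Reference trajectory produced by the trained policy under the nominal model.
u_ref = [0.5] * 20
x_ref = [1.0]
for u in u_ref:
    x_ref.append(a_nom * x_ref[-1] + b * u)

# Convex one-step adaptation: minimize (a_dep*x + b*u - x_ref_next)^2 over u.
# This quadratic in u has the closed-form minimizer u = (x_ref_next - a_dep*x)/b.
x = 1.0
max_err = 0.0
for k in range(len(u_ref)):
    u = (x_ref[k + 1] - a_dep * x) / b
    x = a_dep * x + b * u
    max_err = max(max_err, abs(x - x_ref[k + 1]))

print(f"worst-case tracking error under the shifted dynamics: {max_err:.2e}")
```

In this fully actuated scalar case the convex problem recovers the reference exactly; in the general nonlinear setting a convex *relaxation* trades some optimality for this kind of fast, reliable solve.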
Conformance Testing as Falsification for Cyber-Physical Systems
In Model-Based Design of Cyber-Physical Systems (CPS), it is often desirable
to develop several models of varying fidelity. Models of different fidelity
levels can enable mathematical analysis of the model, control synthesis,
faster simulation, etc. Furthermore, when (automatically or manually)
transitioning from a model to its implementation on an actual computational
platform, two different versions of the same system are again being developed.
In all these cases, it is necessary to define a rigorous notion of conformance
between different models and between models and their implementations. This
paper argues that conformance should be a measure of distance between systems.
Although a range of theoretical distance notions exists, no way to compute
such distances for industrial-size systems and models has been proposed yet.
This paper addresses exactly this problem. A universal notion of conformance as
closeness between systems is rigorously defined, and evidence is presented that
this implies a number of other application-dependent conformance notions. An
algorithm for detecting that two systems are not conformant is then proposed,
which uses existing proven tools. A method is also proposed to measure the
degree of conformance between two systems. The results are demonstrated on a
range of models.
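The conformance-as-distance idea can be sketched in a few lines (the first-order models, inputs, and sup-norm metric below are assumptions for illustration, not the paper's conformance definition or falsification tool): simulate two fidelity levels of the "same" system on shared inputs, measure the distance between their outputs, and search over inputs for the worst case, falsification-style.

```python
import math

def simulate(tau, u, dt=0.01, steps=500):
    """First-order lag y' = (u(t) - y)/tau, Euler integration (toy model)."""
    y, ys = 0.0, []
    for k in range(steps):
        y += dt * (u(k * dt) - y) / tau
        ys.append(y)
    return ys

def conformance_degree(ys1, ys2):
    """Sketch of a conformance measure: sup-norm distance between outputs."""
    return max(abs(a - b) for a, b in zip(ys1, ys2))

# Two fidelity levels of the 'same' plant: slightly different time constants.
high_fidelity = lambda u: simulate(1.0, u)
low_fidelity = lambda u: simulate(1.2, u)

# Falsification-style search: try a family of inputs, keep the worst case.
inputs = [lambda t, w=w: math.sin(w * t) for w in (0.5, 1.0, 2.0, 4.0)]
worst = max(conformance_degree(high_fidelity(u), low_fidelity(u)) for u in inputs)
print(f"estimated conformance degree over the input family: {worst:.3f}")
```

A nonzero degree witnesses non-conformance at that tolerance; driving the search with an off-the-shelf falsifier rather than a fixed input family is what makes the approach scale to industrial models.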